Rescuing neural spike train models from bad MLE

Arribas, Diego M., Zhao, Yuan, Park, Il Memming

arXiv.org Machine Learning

The standard approach to fitting an autoregressive spike train model is to maximize the likelihood for one-step prediction. This maximum likelihood estimation (MLE) often leads to models that perform poorly when generating samples recursively for more than one time step. Moreover, the generated spike trains can fail to capture important features of the data and even show diverging firing rates. To alleviate this, we propose to directly minimize the divergence between recorded neural spike trains and model-generated spike trains using spike train kernels. We develop a method that stochastically optimizes the maximum mean discrepancy (MMD) induced by the kernel. Experiments performed on both real and synthetic neural data validate the proposed approach, showing that it leads to well-behaved models. Using different combinations of spike train kernels, we show that we can control the trade-off between different features, which is critical for dealing with model mismatch.
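The core quantity the abstract describes can be sketched as follows: embed each spike train in a vector space via a kernel smoothing, then compute an unbiased estimate of the squared MMD between recorded and generated trains. This is a minimal illustration, not the authors' code; the exponential smoothing filter, the Gaussian kernel on the embeddings, and all function names are assumptions for the sketch.

```python
import numpy as np

def smooth(spikes, tau=5.0):
    # Embed binary spike trains (n_trials, T) by convolving each trial
    # with a causal exponential filter of time constant tau (assumed
    # embedding; the paper considers several spike train kernels).
    T = spikes.shape[1]
    kern = np.exp(-np.arange(T) / tau)
    return np.apply_along_axis(lambda s: np.convolve(s, kern)[:T], 1, spikes)

def mmd2(X, Y, bandwidth=1.0):
    # Unbiased estimate of squared MMD between two sets of embedded
    # spike trains, using a Gaussian kernel on the embeddings.
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))
    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    n, m = len(X), len(Y)
    return ((Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
            - 2 * Kxy.mean())
```

In the paper's setting, spike trains sampled recursively from the model would play the role of one sample set, and the MMD estimate would be minimized stochastically with respect to the model parameters.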


Maximum Uncertainty Procedures for Interval-Valued Probability Distributions

Pittarelli, Michael

arXiv.org Artificial Intelligence

Measures of uncertainty and divergence are introduced for interval-valued probability distributions and are shown to have desirable mathematical properties. A maximum uncertainty inference procedure for marginal interval distributions is presented. A technique for reconstruction of interval distributions from projections is developed based on this inference procedure. Interval distributions may represent collections of confidence intervals derived from frequency data, imprecisely stated subjective probabilities, known linear equality or inequality constraints, etc. Thus, interval distributions sometimes provide a more realistic characterization of uncertainty than do real-valued probability distributions.
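One concrete instance of a maximum uncertainty selection is picking, among all distributions consistent with interval constraints, the one with maximum Shannon entropy. The sketch below assumes Shannon entropy as the uncertainty measure and is not necessarily the paper's exact procedure; it uses the fact that the entropy maximizer over box constraints on the simplex clips a common level into each interval, with the level found by bisection.

```python
import numpy as np

def max_entropy_interval(lower, upper, tol=1e-9):
    # Hypothetical helper: maximum-entropy distribution p with
    # lower[i] <= p[i] <= upper[i] and sum(p) == 1.
    # The maximizer is p[i] = clip(lam, lower[i], upper[i]) for a
    # common level lam; sum(p) is monotone in lam, so bisect on lam.
    lower = np.asarray(lower, float)
    upper = np.asarray(upper, float)
    assert lower.sum() <= 1.0 + tol and upper.sum() >= 1.0 - tol
    lo, hi = 0.0, 1.0
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        if np.clip(lam, lower, upper).sum() > 1.0:
            hi = lam
        else:
            lo = lam
    return np.clip(0.5 * (lo + hi), lower, upper)
```

For example, with bounds loose enough to admit the uniform distribution, the procedure returns the uniform distribution, since no other feasible distribution has higher entropy.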